Can machine translation systems be evaluated by the crowd alone?
Authors
Yvette Graham, Timothy Baldwin, Alistair Moffat, Justin Zobel

Abstract
Graham, Yvette, Timothy Baldwin, Alistair Moffat and Justin Zobel (to appear). Can Machine Translation Systems be Evaluated by the Crowd Alone? Natural Language Engineering.
Crowd-sourced assessments of machine translation quality allow evaluations to be carried out cheaply and on a large scale. It is essential, however, that the crowd’s work be filtered to avoid contamination of results through the inclusion of false assessments. One method is to filter via agreement with experts, but even amongst experts agreement levels may not be high. In this paper, we present...
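The abstract mentions filtering crowd-sourced assessments by their agreement with expert judgements. As a purely hypothetical illustration of that general idea (not the filtering method the paper itself proposes), the sketch below keeps only workers whose scores correlate with expert scores above an assumed threshold; the worker names, scores, and cut-off are all invented for demonstration.

```python
# Hypothetical sketch of agreement-based filtering of crowd assessments.
# Not the paper's method: names, scores, and the 0.5 cut-off are assumptions.
from statistics import correlation  # Pearson's r, Python 3.10+

# Invented adequacy scores (0-100) for the same five translations.
expert_scores = [85, 40, 70, 20, 95]
crowd_scores = {
    "worker_a": [80, 45, 65, 25, 90],   # tracks the experts closely
    "worker_b": [50, 55, 50, 60, 45],   # near-random clicking
}

AGREEMENT_THRESHOLD = 0.5  # assumed cut-off, not from the paper

# Keep only workers whose scores correlate with the experts' scores.
kept = {
    worker: scores
    for worker, scores in crowd_scores.items()
    if correlation(expert_scores, scores) >= AGREEMENT_THRESHOLD
}
print(sorted(kept))  # ['worker_a']
```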
Similar resources

Can Translation Be Mechanized?
A GOOD translation of a novel requires many qualities from the translator. He must be a native speaker or have at least native-like command of the language into which he translates, the target language; he must have a good knowledge of the language from which he translates, the source language, and preferably have lived among the people who habitually speak the source language; he must have som...
EvaluationNet: Can Human Skill be Evaluated by Deep Networks?
With the recent substantial growth of media such as YouTube, a considerable number of instructional videos covering a wide variety of tasks are available online. Therefore, online instructional videos have become a rich resource for humans to learn everyday skills. In order to improve the effectiveness of the learning with instructional video, observation and evaluation of the activity are requ...
Can Economic Development Programs Be Evaluated?
The question addressed in this paper seems simple: Can economic development programs be evaluated? But the answer is not simple because of the nature of evaluation. To determine a program's effectiveness requires a sophisticated evaluation because it requires the evaluator to distinguish changes due to the program from changes due to nonprogram factors. The evaluator must focus on the outcomes ...
Getting Expert Quality from the Crowd for Machine Translation Evaluation
This paper addresses the manual evaluation of Machine Translation (MT) quality by means of crowdsourcing. To this purpose, we replicated the ranking evaluation of the Arabic-English BTEC task proposed at the IWSLT 2010 Workshop by hiring non-experts through the CrowdFlower interface to Amazon’s Mechanical Turk. In particular, we investigated the effectiveness of “gold units” offered by CrowdFlow...
Journal
Journal title: Natural Language Engineering
Year: 2015
ISSN: 1351-3249, 1469-8110
DOI: 10.1017/s1351324915000339